Multilinear Maximum Distance Embedding Via L1-Norm Optimization
Authors
Abstract
Dimensionality reduction plays an important role in many machine learning and pattern recognition tasks. In this paper, we present a novel dimensionality reduction algorithm called multilinear maximum distance embedding (MDE), which includes three key components. To preserve the local geometry and the discriminant information in the embedded space, MDE uses a new objective function that maximizes the distances between particular pairs of data points, such as the distances between nearby points and the distances between data points from different classes. To make the mapping of new data points straightforward and, more importantly, to keep the natural tensor structure of high-order data, MDE integrates multilinear techniques to learn the transformation matrices sequentially. To provide reasonable and stable embedding results, MDE employs the L1-norm, which is more robust to outliers, to measure the dissimilarity between data points. Experiments on various datasets demonstrate that MDE achieves good embeddings of high-order data for classification tasks.
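The abstract does not spell out the optimization, but a minimal single-mode sketch of an L1-norm maximum-distance objective of this kind might look as follows (Python/NumPy; the pair selection, learning rate, and QR re-orthonormalization are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def l1_pairwise_objective(W, X, pairs):
    """Sum of L1 distances between selected pairs after projection.

    W     : (d, k) projection matrix (one mode of the multilinear map)
    X     : (n, d) data matrix
    pairs : list of (i, j) index pairs whose distances should be large
    """
    Y = X @ W                                    # embed into k dimensions
    return sum(np.abs(Y[i] - Y[j]).sum() for i, j in pairs)

def subgradient_step(W, X, pairs, lr=1e-3):
    """One (sub)gradient ascent step on the L1 objective above."""
    Y = X @ W
    G = np.zeros_like(W)
    for i, j in pairs:
        s = np.sign(Y[i] - Y[j])                 # subgradient of |.|
        G += np.outer(X[i] - X[j], s)
    W = W + lr * G
    # re-orthonormalize the columns to keep the projection well conditioned
    Q, _ = np.linalg.qr(W)
    return Q
```

In the multilinear setting, one such matrix would be learned per tensor mode, holding the other modes fixed and cycling until convergence.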
Similar papers
Norm Attaining Multilinear Forms on L1(μ)
Given an arbitrary measure μ, this study shows that the set of norm attaining multilinear forms is not dense in the space of all continuous multilinear forms on L1(μ). However, the density holds if and only if μ is purely atomic. Furthermore, the study presents an example of a Banach space X in which the set of norm attaining operators from X into X∗ is dense in the space of all bounded linea...
Dimension Reduction in the l1 norm
The Johnson-Lindenstrauss Lemma shows that any set of n points in Euclidean space can be mapped linearly down to O((log n)/ε²) dimensions such that all pairwise distances are distorted by at most 1 + ε. We study the following basic question: does there exist an analogue of the Johnson-Lindenstrauss Lemma for the l1 norm? Note that the Johnson-Lindenstrauss Lemma gives a linear embedding which is inde...
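For context, the Euclidean guarantee that this question starts from can be realized with a random Gaussian projection; a minimal sketch (the constant 8 and the parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 1000, 500, 0.5
k = int(np.ceil(8 * np.log(n) / eps**2))    # JL target dimension, up to constants

X = rng.normal(size=(n, d))
R = rng.normal(size=(d, k)) / np.sqrt(k)    # random Gaussian projection
Y = X @ R

# spot-check the distortion of one pairwise distance
i, j = 0, 1
ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
print(f"distance ratio after projection: {ratio:.3f}")   # close to 1 w.h.p.
```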
Multitask learning meets tensor factorization: task imputation via convex optimization
We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, e.g., (consumer, time). The weight vectors can be collected into a tensor, and the (multilinear) rank of the tensor controls the amount of sharing of information among tasks. Two types of convex relaxations have recently been proposed for the tensor multilinea...
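The convex relaxations the abstract refers to are beyond a short snippet, but the underlying construction, collecting the task weight vectors into a tensor whose multilinear rank couples the tasks, can be sketched as follows (Python/NumPy; all dimensions are toy assumptions):

```python
import numpy as np

# toy setup: tasks indexed by (consumer, time), each with a weight vector in R^d
n_consumers, n_times, d = 4, 5, 6
rng = np.random.default_rng(0)

# low multilinear-rank ground truth via a small Tucker core
core = rng.normal(size=(2, 2, 2))
U = rng.normal(size=(n_consumers, 2))
V = rng.normal(size=(n_times, 2))
P = rng.normal(size=(d, 2))
W = np.einsum('abc,ia,jb,kc->ijk', core, U, V, P)   # (consumer, time, d) tensor

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# the multilinear rank is the tuple of ranks of the mode unfoldings
mlrank = tuple(np.linalg.matrix_rank(unfold(W, m)) for m in range(3))
print(mlrank)   # (2, 2, 2): low rank ties the tasks together
```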
l0 Sparse Inverse Covariance Estimation
Recently, there has been a focus on penalized log-likelihood covariance estimation for sparse inverse covariance (precision) matrices. The penalty is responsible for inducing sparsity, and a very common choice is the convex l1 norm. However, the best estimator performance is not always achieved with this penalty. The most natural sparsity-promoting "norm" is the non-convex l0 penalty, but its lack ...
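For comparison, the convex l1 baseline that this abstract contrasts with is the graphical lasso; a minimal sketch using scikit-learn's GraphicalLasso (the toy precision matrix and the alpha value are illustrative assumptions):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# ground-truth sparse precision matrix (tridiagonal) and matching samples
d = 5
theta = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))
X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(theta), size=2000)

# l1-penalized maximum likelihood (graphical lasso); alpha sets the sparsity
model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))   # off-tridiagonal entries shrink toward 0
```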
On Inverse Problems of Optimum Perfect Matching
As far as we know, for most polynomially solvable network optimization problems, the inverse problems under the l1 or l∞ norm have been studied, with the exception of the inverse maximum-weight matching problem in non-bipartite networks. In this paper we discuss the inverse problem of maximum-weight perfect matching in a non-bipartite network under the l1 and l∞ norms. It has been proved that the inverse maximum-we...
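The inverse problem asks how little the edge weights must be perturbed, measured in the l1 or l∞ norm, so that a prescribed perfect matching becomes optimal; the forward problem being inverted can be stated in a few lines with networkx (the toy graph is an illustrative assumption):

```python
import networkx as nx

# forward problem: maximum-weight perfect matching in a non-bipartite graph
G = nx.Graph()
G.add_weighted_edges_from([
    (0, 1, 4), (0, 2, 1), (0, 3, 3),
    (1, 2, 2), (1, 3, 5), (2, 3, 6),
])

# maxcardinality=True forces a perfect matching when one exists
M = nx.max_weight_matching(G, maxcardinality=True)
print(M)   # {(0, 1), (2, 3)} with total weight 4 + 6 = 10
```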
Publication year: 2010